    Guiding object recognition: a shape model with co-activation networks

    The goal of image understanding research is to develop techniques that automatically extract meaningful information from a population of images. This abstract goal manifests itself in a variety of application domains. Video understanding is a natural extension of image understanding. Many video understanding algorithms apply static-image algorithms to successive frames to identify patterns of consistency; this wastes a significant amount of computation and may produce erroneous results, because static algorithms are not designed to indicate corresponding pixel locations between frames. Video is more than a collection of images: it is an ordered collection that exhibits temporal coherence, an additional feature alongside edges, colors, and textures. Motion information provides a level of visual information that cannot be obtained from an isolated image. Leveraging motion cues keeps an algorithm from "starting fresh" at each frame by focusing the region of attention, analogous to the attentional system of the human visual system. Relying on motion information alone is insufficient due to the aperture problem, whereby local motion information is ambiguous in at least one direction. Consequently, motion cues provide only leading and trailing motion edges, and bottom-up approaches that use gradient or region properties to complete moving regions are limited. Object recognition facilitates higher-level processing and is an integral component of image understanding. We present a components-based object detection and localization algorithm for static images and show how this same system provides top-down segmentation for the detected object. A detailed analysis of the model dynamics during localization shows consistent behavior across a variety of inputs, permitting model reduction and a substantial speed increase with little or no performance degradation. We present four specific enhancements to reduce false positives when instances of the target category are not present. First, a one-shot rule is used to discount coincident secondary hypotheses. Next, we demonstrate that using an entire shape model to localize any single instance is inappropriate and introduce co-activation networks to represent the component relations appropriate for a particular recognition context. Next, we describe how the co-activation network can be combined with motion cues to overcome the aperture problem by providing context-specific, top-down shape information, achieving detection and segmentation in video. Finally, we present discriminating features arising from these enhancements and apply supervised learning techniques to capture the informational contribution of each approach, associating a confidence measure with each detection.
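
    The thesis itself is not reproduced here, so as a rough illustration only, the following minimal Python sketch shows one way component detections could be gated by a context-specific co-activation network and combined into a per-hypothesis confidence score; the component layout, weights, and scoring rule are assumptions, not the author's implementation.

        # Illustrative sketch only: scoring an object hypothesis from component
        # detections gated by a context-specific co-activation network.
        # All names, weights, and thresholds are assumptions.
        import numpy as np

        def hypothesis_confidence(activations, coactivation, threshold=0.5):
            """activations: length-N component detector responses in [0, 1].
            coactivation: N x N matrix; entry (i, j) > 0 means components i and j
            are expected to co-occur in this recognition context."""
            active = activations > threshold
            support = 0.0
            for i in range(len(activations)):
                for j in range(i + 1, len(activations)):
                    if active[i] and active[j]:
                        # Only pairs licensed by the co-activation network add evidence.
                        support += coactivation[i, j] * activations[i] * activations[j]
            return support

        # Toy context in which only components 0 and 1 are expected to co-occur.
        acts = np.array([0.9, 0.8, 0.7])
        coact = np.array([[0.0, 1.0, 0.0],
                          [1.0, 0.0, 0.0],
                          [0.0, 0.0, 0.0]])
        print(hypothesis_confidence(acts, coact))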

    A New Perspective on Coastally Trapped Disturbances Using Data from the Satellite Era

    The ability of global climate models to accurately simulate marine stratiform clouds continues to challenge the atmospheric science community. These cloud types, which account for a large uncertainty in Earth’s radiation budget, are generally difficult to characterize due to their shallowness and spatial inhomogeneity. Previous work investigating marine boundary layer (MBL) clouds off the California coast has focused on clouds that form under the typical northerly flow regime during the boreal warm season. From about June through September, however, these northerly winds may reverse and become southerly as part of a coastally trapped disturbance (CTD). As the flow surges northward, it is accompanied by a broad cloud deck. Because these events are difficult to forecast, in situ observations of CTDs are scarce, and little is known about their cloud physical properties. A climatological perspective of 23 CTD events spanning 2004 to 2016 is presented using several data products, including model reanalyses, buoys, and satellites. For the first time, satellite retrievals suggest that CTD cloud decks may play a unique role in the radiation budget due to a combination of aerosol sources that enhance cloud droplet number concentration and reduce cloud droplet effective radius. This particular cloud regime should therefore be treated differently from the regime more commonly found in the summertime months over the northeast Pacific Ocean. The potential influence of a coherent wind stress cycle on sea surface temperatures and sea salt aerosol is also explored.
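
    The abstract does not give its retrieval equations, but the qualitative link it draws between droplet number and effective radius can be illustrated with the standard relation r_e = (3 LWC / (4 pi rho_w k N_d))^(1/3); the sketch below evaluates it for assumed, illustrative liquid water content and droplet concentrations, not values from the study.

        # Illustrative sketch: cloud droplet effective radius from liquid water
        # content and droplet number concentration. Shows why enhanced droplet
        # numbers imply smaller effective radii at fixed liquid water content.
        # All numbers below are assumptions, not retrievals from the paper.
        import numpy as np

        RHO_W = 1000.0  # density of liquid water, kg m^-3

        def effective_radius(lwc, n_d, k=0.8):
            """lwc: liquid water content (kg m^-3); n_d: droplet number (m^-3);
            k: ratio of volume-mean to effective radius cubed (~0.8 for marine clouds)."""
            return (3.0 * lwc / (4.0 * np.pi * RHO_W * k * n_d)) ** (1.0 / 3.0)

        lwc = 0.3e-3                      # 0.3 g m^-3 expressed in kg m^-3
        for n_cm3 in (50, 150, 300):      # clean marine vs. aerosol-enhanced
            n_d = n_cm3 * 1e6             # cm^-3 -> m^-3
            print(n_cm3, "cm^-3 ->", 1e6 * effective_radius(lwc, n_d), "micrometres")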

    Marine Boundary Layer Clouds Associated with Coastally Trapped Disturbances: Observations and Model Simulations

    This work has been accepted to the Journal of the Atmospheric Sciences. The AMS does not guarantee that the copy provided here is an accurate copy of the final published work. Modeling marine low clouds and fog in coastal environments remains an outstanding challenge due to the inherently complex ocean–land–atmosphere system. This is especially important in the context of global circulation models due to the profound radiative impact of these clouds. This study utilizes aircraft and satellite measurements, in addition to numerical simulations using the Weather Research and Forecasting (WRF) Model, to examine three well-observed coastally trapped disturbance (CTD) events from June 2006, July 2011, and July 2015. Cloud water-soluble ionic and elemental composition analyses conducted for two of the CTD cases indicate that anthropogenic aerosol sources may impact CTD cloud decks due to synoptic-scale patterns associated with CTD initiation. In general, the dynamics and thermodynamics of the CTD systems are well represented and are relatively insensitive to the choice of physics parameterizations; however, a set of WRF simulations suggests that the treatment of model physics strongly influences CTD cloud field evolution. Specifically, cloud liquid water path (LWP) is highly sensitive to the choice of planetary boundary layer (PBL) scheme; in many instances, the PBL scheme affects cloud extent and LWP values as much as or more than the microphysics scheme. Results suggest that differences in the treatment of entrainment and vertical mixing in the Yonsei University (nonlocal) and Mellor–Yamada–Janjić (local) PBL schemes may play a significant role. The impact of using different driving models, namely the North American Mesoscale Forecast System (NAM) 12-km analysis and the NCEP North American Regional Reanalysis (NARR) 32-km products, is also investigated.
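
    As a point of reference for the LWP sensitivity discussed above, liquid water path is the vertical integral of cloud liquid water, LWP = integral of rho * q_c dz. The short sketch below evaluates it for an assumed, idealized stratocumulus column; it is not code for handling actual WRF output.

        # Illustrative sketch: liquid water path as the vertical integral of
        # cloud liquid water over a model column. Profile values are assumptions.
        import numpy as np

        def liquid_water_path(q_c, rho, dz):
            """q_c: cloud liquid water mixing ratio (kg kg^-1) per level;
            rho: air density (kg m^-3); dz: layer thickness (m). Returns g m^-2."""
            return 1e3 * np.sum(q_c * rho * dz)

        # Idealized 200 m thick stratocumulus layer split into 20 levels.
        dz = np.full(20, 10.0)
        rho = np.full(20, 1.1)
        q_c = np.linspace(0.0, 0.4e-3, 20)   # liquid water increasing toward cloud top
        print(liquid_water_path(q_c, rho, dz), "g m^-2")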

    The Translational Medicine Ontology and Knowledge Base: driving personalized medicine by bridging the gap between bench and bedside

    Background: Translational medicine requires the integration of knowledge using heterogeneous data from health care to the life sciences. Here, we describe a collaborative effort to produce a prototype Translational Medicine Knowledge Base (TMKB) capable of answering questions relating to clinical practice and pharmaceutical drug discovery. Results: We developed the Translational Medicine Ontology (TMO) as a unifying ontology to integrate chemical, genomic and proteomic data with disease, treatment, and electronic health records. We demonstrate the use of Semantic Web technologies in the integration of patient and biomedical data, and reveal how such a knowledge base can aid physicians in providing tailored patient care and facilitate the recruitment of patients into active clinical trials. Thus, patients, physicians and researchers may explore the knowledge base to better understand therapeutic options, efficacy, and mechanisms of action. Conclusions: This work takes an important step in using Semantic Web technologies to facilitate integration of relevant, distributed, external sources and progress towards a computational platform to support personalized medicine. Availability: TMO can be downloaded from http://code.google.com/p/translationalmedicineontology and TMKB can be accessed at http://tm.semanticscience.org/sparql
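
    The SPARQL endpoint URL below is the one given in the abstract; the query itself is a minimal, hypothetical smoke test (the actual TMO classes and properties may differ, and the endpoint may no longer be online) showing how such a knowledge base could be queried from Python with the SPARQLWrapper package.

        # Illustrative sketch: querying the TMKB SPARQL endpoint named in the abstract.
        # The graph pattern is a generic example, not a documented TMO query.
        from SPARQLWrapper import SPARQLWrapper, JSON

        sparql = SPARQLWrapper("http://tm.semanticscience.org/sparql")
        sparql.setQuery("""
            PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
            SELECT ?s ?label WHERE {
                ?s rdfs:label ?label .     # any labelled resource, as a smoke test
            } LIMIT 10
        """)
        sparql.setReturnFormat(JSON)

        results = sparql.query().convert()
        for row in results["results"]["bindings"]:
            print(row["s"]["value"], row["label"]["value"])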

    The Rationale of PROV

    The PROV family of documents is the final output of the World Wide Web Consortium Provenance Working Group, chartered to specify a representation of provenance to facilitate its exchange over the Web. This article reflects upon the key requirements, guiding principles, and design decisions that influenced the PROV family of documents. A broad range of requirements were found, relating to the key concepts necessary for describing provenance, such as resources, activities, agents, and events, and to balancing PROV's ease of use with the facility to check its validity. Through this retrospective requirements analysis, the article aims to provide some insight into how PROV turned out as it did and why. Benefits of this insight include better interoperability, a roadmap for alternative investigations and improvements, and solid foundations for future standardization activities.
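
    As a concrete illustration of the entity, activity, and agent concepts mentioned above, the following minimal sketch uses the third-party Python prov package (an assumption of this write-up, not part of the W3C documents) with made-up identifiers.

        # Illustrative sketch of core PROV concepts using the Python `prov` package.
        # Namespace and identifiers are made up for the example.
        from prov.model import ProvDocument

        doc = ProvDocument()
        doc.add_namespace("ex", "http://example.org/")

        report = doc.entity("ex:report")        # a resource whose provenance we describe
        drafting = doc.activity("ex:drafting")  # the activity that produced it
        author = doc.agent("ex:author")         # the agent responsible

        doc.wasGeneratedBy(report, drafting)
        doc.wasAssociatedWith(drafting, author)
        doc.wasAttributedTo(report, author)

        print(doc.get_provn())                  # PROV-N serialization of the record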

    An international effort towards developing standards for best practices in analysis, interpretation and reporting of clinical genome sequencing results in the CLARITY Challenge

    There is tremendous potential for genome sequencing to improve clinical diagnosis and care once it becomes routinely accessible, but this will require formalizing research methods into clinical best practices in the areas of sequence data generation, analysis, interpretation, and reporting. The CLARITY Challenge was designed to spur convergence in methods for diagnosing genetic disease starting from clinical case history and genome sequencing data. DNA samples were obtained from three families with heritable genetic disorders, and genomic sequence data were donated by sequencing platform vendors. The challenge was to analyze and interpret these data with the goals of identifying disease-causing variants and reporting the findings in a clinically useful format. Participating contestant groups were solicited broadly, and an independent panel of judges evaluated their performance. RESULTS: A total of 30 international groups were engaged. The entries reveal a general convergence of practices on most elements of the analysis and interpretation process. However, even given this commonality of approach, only two groups identified the consensus candidate variants in all disease cases, demonstrating a need for consistent fine-tuning of the generally accepted methods. There was greater diversity in the final clinical report content and in the patient consenting process, demonstrating that these areas require additional exploration and standardization. CONCLUSIONS: The CLARITY Challenge provides a comprehensive assessment of current practices for using genome sequencing to diagnose and report genetic diseases. There is remarkable convergence in bioinformatic techniques, but medical interpretation and reporting are areas that require further development by many groups.
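
    To make one step of the pipeline described above concrete, the sketch below screens a VCF file for rare, high-impact candidate variants; the annotation field names (AF, IMPACT), thresholds, and file path are assumptions for illustration, not any contestant group's actual method.

        # Illustrative sketch: filtering a VCF for rare, high-impact candidate variants.
        # Assumes INFO fields "AF" (allele frequency) and "IMPACT" (predicted effect).
        def candidate_variants(vcf_path, max_af=0.001, impacts=("HIGH",)):
            with open(vcf_path) as handle:
                for line in handle:
                    if line.startswith("#"):          # skip header lines
                        continue
                    chrom, pos, _id, ref, alt, _qual, flt, info = line.rstrip("\n").split("\t")[:8]
                    if flt not in ("PASS", "."):
                        continue
                    fields = dict(kv.split("=", 1) for kv in info.split(";") if "=" in kv)
                    af = float(fields.get("AF", 0.0))
                    if af <= max_af and fields.get("IMPACT") in impacts:
                        yield chrom, pos, ref, alt, af

        # Example usage (hypothetical path):
        # for record in candidate_variants("family1.annotated.vcf"):
        #     print(record)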